You are here:
📘 Part I — Theory ✅
🔧 Part II — Tutorial (this lecture)
🧪 Part III — Assignment (next)
Access workshop material here.
In Part I, we explored how Bayesian reasoning helps us understand whether effects are present or not.
Now in Part II, we’ll put that into practice.
You’ll learn how to run Bayesian network analyses using easybgm and JASP, interpret the output, and understand how priors and uncertainty shape your conclusions.
By the end of this session, you'll be able to run and interpret these analyses yourself. In Part III, you'll do all this on your own!
Before we dive into the hands-on part, let’s take a moment to remind ourselves what we covered in Part I and why it matters for what we’re about to do.
In Part I, we discussed the two fundamental questions in network analysis.
The focus in Part I was on testing for an effect (i.e., Question 1), but in Part II we also consider estimating the size and sign of an effect (i.e., Question 2).
We discussed that network estimation cannot tell us why some effects are absent. But
the Bayesian approach helps distinguish absence of evidence from evidence of absence.
Why is this important? Think about when you’ve seen null effects in published network models. Were they truly absent? Or just underpowered?
We use prior probabilities to reflect uncertainty in structure before we see the data, and update them with data using Bayes’ Rule.
What is a posterior probability? It indicates the relative plausibility that a model or structure generated the observed data.
Bayes factors compare how well two competing hypotheses could predict the observed data: \[ \text{BF}_{10} = \frac{P(\text{data} \mid \mathcal{H}_1)}{P(\text{data} \mid \mathcal{H}_0)}. \]
A Bayes factor is a continuous measure of evidence, not just a binary decision. Still, we often use interpretive categories to summarize it; for example, easybgm's default treats Bayes factors above 10 as sufficient evidence for classification.
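To make the arithmetic concrete, here is a small base-R sketch (with made-up numbers) of how a Bayes factor updates prior odds into posterior odds via Bayes' rule:

```r
# Hypothetical example: updating beliefs about a single edge.
# The numbers are illustrative, not from the workshop data.
prior_prob <- 0.5                            # prior inclusion probability P(H1)
prior_odds <- prior_prob / (1 - prior_prob)  # = 1 (no prior preference)
bf_10 <- 10                                  # BF comparing H1 (edge) to H0 (no edge)

posterior_odds <- bf_10 * prior_odds         # Bayes' rule in odds form
posterior_prob <- posterior_odds / (1 + posterior_odds)

posterior_prob  # 10/11, about 0.91: the edge is now 10x more plausible than not
```

Note how the same BF would yield a different posterior probability under a different prior; the Bayes factor quantifies only the evidence contributed by the data.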
Bayesian network models are powerful, but historically they’ve been out of reach for researchers without a background in programming or statistical computing.
The Bayesian Graphical Modeling Lab contributes to an ecosystem of open tools that make these methods more accessible. Our goal is to help researchers explore network structures using principled Bayesian inference — without having to write complex MCMC code.
We work to make Bayesian graphical modeling both transparent and approachable, whether you use R or point-and-click interfaces like JASP.
There are three main computational packages used in Bayesian network modeling. Each makes different assumptions and offers different features. Here’s a comparison:
| Package | Data Types | Structure Learning | Estimation | Priors | Inference | Highlights |
|---|---|---|---|---|---|---|
| BDgraph | Continuous, Binary, Ordinal, Count† | ✅ Yes | ❌ No | Bernoulli | Model-averaged | Structure learning for mixed data |
| BGGM | Continuous, Binary, Ordinal† | ❌ No | ✅ Yes | None | Single model | Edge estimation and network comparison |
| bgms | Binary, Ordinal (Continuous in dev) | ✅ Yes | ✅ Yes | Bernoulli, Beta-Bernoulli, Stochastic Block | Model-averaged | Structure learning + estimation, network comparison, clustering, missing data imputation |
† BDgraph and BGGM use probit models for ordinal/binary data, not Markov random field models.
easybgm and JASP

Rather than asking you to choose and learn each package separately, we use easybgm — a unified interface that wraps around the best parts of these engines.
Depending on your data and choices, easybgm automatically uses the appropriate back-end:
BDgraph, bgms, or (in some cases) BGGM.
We set up our analysis by loading a dataset containing responses from a sample of 2,000 individuals on 13 psychological indicators. The data include three scale scores and ten ordinal items.
```r
# install.packages("easybgm")
library(easybgm)
data = read.csv("https://raw.githubusercontent.com/Bayesian-Graphical-Modelling-Lab/BGM_Workshops/refs/heads/NetPsy_COMO_2025/Part%20II/exampledata.csv")[, -1]
head(data)
```

In JASP:

- Launch JASP and locate the main toolbar.
- Open your .csv file and click Open. After importing, you should see your dataset listed in the main workspace.
- Click the plus sign (+) at the top-right of the JASP window to open the modules menu. Once loaded, you'll see a new menu item: "Bayesian Network" under the Network tab in the top menu bar.
Before we fit the model, it’s useful to explore the function we’ll use:
```r
?easybgm
```

Here's the full function signature to get oriented:

```r
easybgm(
  data,
  type,
  package = NULL,
  not_cont = NULL,
  iter = 10000,
  save = FALSE,
  centrality = FALSE,
  progress = TRUE,
  posterior_method = "model-averaged",
  ...
)
```

The `type` argument tells easybgm what kind of data you're working with, for example `"continuous"`, `"binary"`, `"ordinal"`, or `"mixed"`. The `package` argument chooses the estimation engine.
To keep things fast and simple for this example, we’ll treat the data as if it were continuous (yes, even though it isn’t fully), and use BDgraph to fit the model.
Now let’s fit the model and summarize the results.
Fill in the blanks below to use the easybgm() function on the workshop dataset and inspect the summary(fit) output.
```r
fit = easybgm(data = data,
              type = ____________,
              package = ____________,
              iter = 1e4,
              save = TRUE,
              centrality = TRUE,
              progress = TRUE)
summary(fit)
```

✅ What is the total number of edges?
✅ Which edge(s) are excluded with high certainty?
This mirrors Task 1.1 in the assignment.
```r
fit = easybgm(data = data,
              type = "continuous",
              package = "BDgraph",
              iter = 1e4,
              save = TRUE,
              centrality = TRUE,
              progress = TRUE)
summary(fit)
```

```
BAYESIAN ANALYSIS OF NETWORKS
 Model type: ggm
 Number of nodes: 13
 Fitting Package: bdgraph
---
EDGE SPECIFIC OVERVIEW
                Relation Estimate Posterior Incl. Prob. Inclusion BF Category
       depression-stress    0.599                  1.00          Inf included
      depression-anxiety    0.333                  1.00          Inf included
          stress-anxiety    0.206                  1.00          Inf included
  depression-extraverted    0.000                  0.00        0.000 excluded
      stress-extraverted    0.000                  0.01        0.010 excluded
                     ...      ...                   ...          ...      ...
Bayes Factors larger than 10 were considered sufficient evidence for the classification
Bayes factors were obtained using Bayesian model-averaging.
---
AGGREGATED EDGE OVERVIEW
Number of edges with sufficient evidence for inclusion: 43
Number of edges with insufficient evidence: 8
Number of edges with sufficient evidence for exclusion: 27
Number of possible edges: 78
---
STRUCTURE OVERVIEW
Number of visited structures: 17
Number of possible structures: 3.022315e+23
Posterior probability of most likely structure: 0.2856
---
```

By default, easybgm uses an evidence threshold (Bayes factor) of 10: edges with an inclusion Bayes factor above 10 count as evidence for inclusion, those below 1/10 as evidence for exclusion, and everything in between as inconclusive.
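The aggregated counts in the summary follow from simple combinatorics: a network with 13 nodes has choose(13, 2) = 78 possible edges, and since each edge can be present or absent, there are 2^78 possible structures. A quick base-R check:

```r
p <- 13                    # number of nodes in the workshop data
n_edges <- choose(p, 2)    # all possible undirected edges: p * (p - 1) / 2
n_edges                    # 78, matching "Number of possible edges"

n_structures <- 2^n_edges  # each edge is either in or out of the structure
n_structures               # ~3.022315e+23, matching "Number of possible structures"
```

This also explains why the sampler visits only a tiny fraction of structures (17 here): the structure space is astronomically large, and posterior mass concentrates on a few plausible structures.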
You can adjust this threshold to make the evidence criterion more or less strict.
Change the evidence threshold in summary(fit, evidence_thresh = ___) and see how the number of included and excluded edges changes.
✅ Which edges change their classification?
✅ What happens when you use a very lenient threshold (e.g., 1.5)?
This gives you practice for adjusting evidence sensitivity in Part III.
```r
summary(fit, evidence_thresh = 3)
```

```
BAYESIAN ANALYSIS OF NETWORKS
 Model type: ggm
 Number of nodes: 13
 Fitting Package: bdgraph
---
EDGE SPECIFIC OVERVIEW
                Relation Estimate Posterior Incl. Prob. Inclusion BF Category
       depression-stress    0.599                  1.00          Inf included
      depression-anxiety    0.333                  1.00          Inf included
          stress-anxiety    0.206                  1.00          Inf included
  depression-extraverted    0.000                  0.00        0.000 excluded
      stress-extraverted    0.000                  0.01        0.010 excluded
                     ...      ...                   ...          ...      ...
Bayes Factors larger than 3 were considered sufficient evidence for the classification
Bayes factors were obtained using Bayesian model-averaging.
---
AGGREGATED EDGE OVERVIEW
Number of edges with sufficient evidence for inclusion: 44
Number of edges with insufficient evidence: 3
Number of edges with sufficient evidence for exclusion: 31
Number of possible edges: 78
---
STRUCTURE OVERVIEW
Number of visited structures: 17
Number of possible structures: 3.022315e+23
Posterior probability of most likely structure: 0.2856
---
```

In JASP, there is no automatic classification table showing which edges are included, excluded, or inconclusive.
But don’t worry, you can determine this yourself by inspecting the inclusion Bayes factors in the output table. 💪
(And we believe in you!)
✅ A visual summary is available: the edge evidence plot, which we’ll explore in the next section.
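The classification rule itself is simple enough to apply by hand. Here is a minimal base-R sketch, using made-up inclusion Bayes factors (not the workshop results):

```r
# Illustrative inclusion Bayes factors for three hypothetical edges
bf <- c("A-B" = 25, "A-C" = 0.04, "B-C" = 1.8)
thresh <- 10  # easybgm's default evidence threshold

# BF > thresh: evidence for inclusion; BF < 1/thresh: evidence for exclusion;
# anything in between: inconclusive
category <- ifelse(bf > thresh, "included",
            ifelse(bf < 1 / thresh, "excluded", "inconclusive"))
category
# A-B: "included", A-C: "excluded", B-C: "inconclusive"
```

Applying the same rule to JASP's inclusion Bayes factor column reproduces the three-way classification that easybgm prints automatically.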
The summary table is comprehensive but can become hard to interpret. Visualizations often help reveal patterns and structure more intuitively.
Start with the edge evidence plot:

```r
plot_edgeevidence(fit, evidence_thresh = 10, legend = FALSE)
```

This plot categorizes edges by their Bayes factor: edges with sufficient evidence for inclusion, edges with sufficient evidence for exclusion, and edges with inconclusive evidence. You can lower the threshold to see how evidence strength varies:

```r
plot_edgeevidence(fit, evidence_thresh = 3, legend = FALSE)
```

If the network appears cluttered (a so-called "spaghetti plot"), use the split option to separate edges with any evidence for inclusion from those with evidence for exclusion.
Use plot_edgeevidence() with and without the split = TRUE option.
✅ Which edge categories become clearer when you split the plot?
✅ Do your visual conclusions match the summary table?
You’ll generate and interpret these plots in Task 1.2.
```r
par(mfrow = c(1, 2))
plot_edgeevidence(fit, evidence_thresh = 10, split = TRUE, legend = FALSE)
```

The left panel shows edges with BF > 1; the right panel shows those with BF ≤ 1.
In JASP, you can plot the edge evidence using evidence categorization.
Once we’ve established which edges have credible evidence for inclusion, we can inspect the strength and direction of those connections. But which ones should we show?
In Bayesian model averaging, not all edges have convincing evidence. Some might be weak or ambiguous. By default, easybgm visualizes the median probability model, the network consisting of all edges with a posterior inclusion probability > 0.5 (or equivalently, Bayes Factor > 1). You can adjust this threshold using the exc_prob argument:
```r
plot_network(fit, exc_prob = 0.5)
```

Under the hood, easybgm uses qgraph, meaning you can pass any of its arguments to fully control the layout and style, for example grouping nodes or setting colors.
Run plot_network() with and without custom arguments like groups or layout.
✅ Does changing exc_prob influence which edges are shown?
✅ Can you group variables by scale or type?
This corresponds to Task 2.1 in the assignment.
```r
Names_data = colnames(data)
groups_data = c(rep("DASS", 3), rep("Personality", 10))

plot_network(fit,
             exc_prob = 0.5,
             layout = "spring",
             nodeNames = Names_data,
             groups = groups_data,
             color = c("#fbb20a", "#E59866"),
             theme = "Fried",
             dashed = TRUE)
```

The network plot shows posterior means, but this hides uncertainty. A useful complement is the posterior highest density interval (HDI) plot, which shows how stable the estimates are.
This plot highlights which posterior distributions are tightly concentrated, and which are uncertain, helping you interpret results more cautiously.
Use plot_parameterHDI() to inspect parameter uncertainty.
✅ Which edges have narrow (precise) intervals?
✅ Are any edge estimates near zero?
Compare the plots for edge selection vs. full models in Task 2.3.
```r
plot_parameterHDI(fit)
```

JASP does not (yet) include a Highest Density Interval (HDI) plot for edge weights.
If you want to assess uncertainty in edge parameters, you’ll need to use R and the plot_parameterHDI() function from easybgm.
We expect this feature to be added in a future JASP release.
Centrality measures attempt to quantify how “important” a node is in the network. For example, how often it sits on the shortest path between other nodes (betweenness), or how connected it is overall (degree).
While popular, centrality metrics in psychological networks have been criticized for their instability and unclear interpretation, especially in small or highly interconnected networks.
The nice thing about the Bayesian approach is that it provides credible intervals for these centrality indices, giving you a sense of how uncertain these summaries are.
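As a toy illustration of what one such index computes, here is strength centrality (the sum of a node's absolute edge weights) for a hand-made three-node weight matrix. In practice, easybgm computes centrality indices, with their posterior uncertainty, for you:

```r
# Toy symmetric partial-correlation matrix for 3 nodes (illustrative values)
W <- matrix(c( 0.0,  0.6,  0.3,
               0.6,  0.0, -0.2,
               0.3, -0.2,  0.0),
            nrow = 3, byrow = TRUE,
            dimnames = list(c("A", "B", "C"), c("A", "B", "C")))

# Strength centrality: how strongly each node is connected overall
strength <- colSums(abs(W))
strength
# A: 0.9, B: 0.8, C: 0.5
```

In the Bayesian setting, this computation is repeated for every posterior sample of the weight matrix, which is what yields the credible intervals in the centrality plot.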
```r
plot_centrality(fit)
```

⚠️ Caution: Interpret centrality values as descriptive summaries, not as causal influence or intervention targets. Use the credible intervals to judge whether centrality differences are meaningful.
Priors allow us to encode assumptions about the network before seeing the data. These assumptions affect both which edges end up in the network (structure priors) and how large the estimated edge weights are (parameter priors).
Some prior choices in easybgm to investigate:
| Prior Type | Argument Example | Interpretation |
|---|---|---|
| Structure prior | `inclusion_probability = 0.1` | Sparse network (few edges) |
| Structure prior | `edge_prior = "Beta-Bernoulli"` | Allows data-driven sparsity tuning |
| Parameter prior | `interaction_scale = 0.25` | Shrinks estimates toward zero (informative) |
| Parameter prior | `interaction_scale = 2.5` | Allows large edge effects (diffuse, default) |
Tip: Try multiple priors and see how your conclusions change. This is a key part of Bayesian modeling.
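To see what a structure prior implies, here is a base-R sketch of the prior odds that any single edge is present under a sparse (0.1) and a dense (0.7) prior inclusion probability:

```r
# Prior inclusion probabilities for a single edge
p_sparse <- 0.1
p_dense  <- 0.7

odds <- function(p) p / (1 - p)
odds(p_sparse)  # 1/9, about 0.11: the prior bets heavily against each edge
odds(p_dense)   # 7/3, about 2.33: the prior favors each edge being present
```

Because the posterior odds equal the inclusion Bayes factor times these prior odds, the same data can yield different edge classifications under the two priors, which is exactly what the exercise below explores.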
Let’s explore how different structure priors influence the analysis using two assumptions: one sparse and one dense.
```r
# install.packages("bgms")
library("bgms")
```

Estimate two networks on the first five ordinal variables (i.e., `data[, 4:8]` in the dataset) using the bgms package:

- one with `inclusion_probability = 0.1`
- one with `inclusion_probability = 0.7`

Then use `summary()` and `plot_edgeevidence()` to compare:
✅ Which edges are included or excluded?
✅ Does the posterior structure probability change?
```r
# Sparse prior model
fit_sparse <- easybgm(
  data = data[, 4:8],
  type = "ordinal",
  package = "bgms",
  iter = 1e4,
  inclusion_probability = 0.1,
  save = TRUE
)

# Dense prior model
fit_dense <- easybgm(
  data = data[, 4:8],
  type = "ordinal",
  package = "bgms",
  iter = 1e4,
  inclusion_probability = 0.7,
  save = TRUE
)

summary(fit_sparse)
```

```
---
AGGREGATED EDGE OVERVIEW
Number of edges with sufficient evidence for inclusion: 6
Number of edges with insufficient evidence: 1
Number of edges with sufficient evidence for exclusion: 3
Number of possible edges: 10
---
STRUCTURE OVERVIEW
Number of visited structures: 4
Number of possible structures: 1024
Posterior probability of most likely structure: 0.986
---
```

```r
summary(fit_dense)
```

```
---
AGGREGATED EDGE OVERVIEW
Number of edges with sufficient evidence for inclusion: 6
Number of edges with insufficient evidence: 0
Number of edges with sufficient evidence for exclusion: 4
Number of possible edges: 10
---
STRUCTURE OVERVIEW
Number of visited structures: 9
Number of possible structures: 1024
Posterior probability of most likely structure: 0.892
---
```

```r
# Compare edge plots
par(mfrow = c(1, 2))
plot_edgeevidence(fit_sparse, main = "Sparse prior")
plot_edgeevidence(fit_dense, main = "Dense prior")
```

You'll try this yourself in Part III – Optional Part 4: Prior Robustness.
You’ve now explored the full Bayesian workflow:
✅ Testing for edge inclusion using Bayes factors
✅ Estimating edge parameters and visualizing uncertainty
✅ Exploring the influence of priors on network conclusions
✅ Doing all of this in both R and JASP
In Part III, you'll put these skills into practice:

- Analyzing new datasets
- Interpreting edge evidence
- Running prior sensitivity checks
- Communicating your results clearly
Whether you work in code or use the JASP interface, everything you need is already in your hands!